68 research outputs found
Fast quantum subroutines for the simplex method
We propose quantum subroutines for the simplex method that avoid classical
computation of the basis inverse. For an m × n constraint matrix with at
most d_c nonzero elements per column, at most d nonzero elements per column
or row of the basis, basis condition number κ, and optimality tolerance
ε, we show that pricing can be performed in
Õ((1/ε) κ d √n (d_c n + d m)) time, where the Õ
notation hides polylogarithmic factors. If the ratio n/m is
larger than a certain threshold, the running time of the quantum subroutine can
be reduced further. The steepest edge pivoting rule also admits a quantum
implementation, at the cost of an additional factor in the running time.
Classically, pricing requires O(d_c^{0.7} m^{1.9} + m^{2+o(1)} + d_c n)
time in the worst case using the fastest known algorithm for sparse matrix
multiplication, and more with steepest
edge. Furthermore, we show that the ratio test can be performed in
Õ((t/δ) κ d² m^{1.5}) time, where t and δ
determine a feasibility tolerance; classically, this requires more time in
the worst case. For well-conditioned sparse problems the quantum subroutines
scale better in m and n, and may therefore have a worst-case asymptotic
advantage. An important feature of our paper is that this asymptotic speedup
does not depend on the data being available in some "quantum form": the input
of our quantum subroutines is the natural classical description of the problem,
and the output is the index of the variables that should leave or enter the
basis.
Comment: Added discussion on condition number and infeasibilities
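Classical pricing, which the quantum subroutine above replaces, computes the reduced costs c_N − c_B B⁻¹N and selects an entering variable. A minimal dense-algebra sketch with Dantzig's rule (the helper name is hypothetical, and production simplex codes maintain a factorization of the basis rather than solving from scratch):

```python
import numpy as np

def price_dantzig(A, c, basis):
    """Classical pricing step of the simplex method (Dantzig rule).

    Computes the reduced costs c_N - c_B B^{-1} N and returns the index
    of the most negative one (the entering variable), or None if the
    current basis is optimal. Illustrative sketch only.
    """
    m, n = A.shape
    bset = set(basis)
    nonbasic = [j for j in range(n) if j not in bset]
    B = A[:, basis]
    # Simplex multipliers y solve B^T y = c_B.
    y = np.linalg.solve(B.T, c[basis])
    reduced = c[nonbasic] - A[:, nonbasic].T @ y
    j_best = int(np.argmin(reduced))
    if reduced[j_best] >= -1e-9:
        return None  # no negative reduced cost: basis is optimal
    return nonbasic[j_best]

# Tiny example: min -x0 - x1 s.t. x0 + x1 + s = 1, starting from the slack basis.
A = np.array([[1.0, 1.0, 1.0]])
c = np.array([-1.0, -1.0, 0.0])
print(price_dantzig(A, c, [2]))  # → 0 (x0 enters)
```
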
On the implementation of a global optimization method for mixed-variable problems
We describe the optimization algorithm implemented in the open-source
derivative-free solver RBFOpt. The algorithm is based on the radial basis
function method of Gutmann and the metric stochastic response surface method of
Regis and Shoemaker. We propose several modifications aimed at generalizing and
improving these two algorithms: (i) the use of an extended space to represent
categorical variables in unary encoding; (ii) a refinement phase to locally
improve a candidate solution; (iii) interpolation models without the
unisolvence condition, to both help deal with categorical variables, and
initiate the optimization before a uniquely determined model is possible; (iv)
a master-worker framework to allow asynchronous objective function evaluations
in parallel. Numerical experiments show the effectiveness of these ideas.
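As context for the interpolation models mentioned above, a cubic RBF surrogate with a linear polynomial tail, the kind of model underlying Gutmann's method, can be fit by solving one augmented linear system. The sketch below is an illustration under standard unisolvence assumptions, not RBFOpt's API; the function name is hypothetical:

```python
import numpy as np

def fit_cubic_rbf(X, f):
    """Fit s(x) = sum_i lam_i ||x - x_i||^3 + b^T x + a interpolating f at
    the rows of X. Minimal sketch: assumes a unisolvent point set, whereas
    RBFOpt also handles models without the unisolvence condition."""
    k, n = X.shape
    # Pairwise-distance matrix, then the cubic radial basis phi(r) = r^3.
    Phi = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2) ** 3
    P = np.hstack([X, np.ones((k, 1))])  # linear polynomial tail [x, 1]
    # Augmented interpolation system [[Phi, P], [P^T, 0]] [lam; coef] = [f; 0].
    M = np.block([[Phi, P], [P.T, np.zeros((n + 1, n + 1))]])
    rhs = np.concatenate([f, np.zeros(n + 1)])
    sol = np.linalg.solve(M, rhs)
    lam, coef = sol[:k], sol[k:]

    def s(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        r = np.linalg.norm(X - x, axis=1)
        return float(lam @ r ** 3 + coef[:n] @ x + coef[-1])
    return s

X = np.array([[0.0], [0.5], [1.0]])
f = np.array([1.0, 0.0, 1.0])
s = fit_cubic_rbf(X, f)
print(abs(s(0.5)) < 1e-9)  # interpolates the data at the sample points
```
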
A local branching heuristic for MINLPs
Local branching is an improvement heuristic, developed within the context of
branch-and-bound algorithms for MILPs, which has proved to be very effective in
practice. For the binary case, it is based on defining a neighbourhood of the
current incumbent solution by allowing only a few binary variables to flip
their value, through the addition of a local branching constraint. The
neighbourhood is then explored with a branch-and-bound solver. We propose a
local branching scheme for (nonconvex) MINLPs which is based on iteratively
solving MILPs and NLPs. Preliminary computational experiments show that this
approach is able to improve the incumbent solution on the majority of the test
instances, requiring only a short CPU time. Moreover, we provide algorithmic
ideas for a primal heuristic whose purpose is to find a first feasible
solution, based on the same scheme.
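For the binary case described above, the local branching constraint has a simple linear form: it bounds the Hamming distance from the incumbent by k. A small sketch (helper names are hypothetical):

```python
def local_branching_cut(x_bar, k):
    """Linear form of the local branching constraint
        sum_{j: x_bar[j]=1} (1 - x[j]) + sum_{j: x_bar[j]=0} x[j] <= k,
    i.e. the Hamming distance from the incumbent x_bar is at most k.
    Returns (coeffs, rhs) with the constraint written as coeffs . x <= rhs."""
    coeffs = [1 if v == 0 else -1 for v in x_bar]
    rhs = k - sum(x_bar)  # constant part of sum_{j: x_bar[j]=1} (1 - x[j])
    return coeffs, rhs

def satisfies(coeffs, rhs, x):
    """Check a candidate binary vector against the cut."""
    return sum(c * v for c, v in zip(coeffs, x)) <= rhs

coeffs, rhs = local_branching_cut([1, 0, 1], k=1)
print(satisfies(coeffs, rhs, [1, 0, 0]))  # one flip away: True
print(satisfies(coeffs, rhs, [0, 1, 1]))  # two flips away: False
```

Adding this cut to the model restricts the branch-and-bound search to the neighbourhood of the incumbent; reversing it (distance at least k + 1) gives the complementary branch.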
Rounding-based heuristics for nonconvex MINLPs
We propose two primal heuristics for nonconvex mixed-integer nonlinear programs. Both are based on the idea of rounding the solution of a continuous nonlinear program subject to linear constraints. Each rounding step is accomplished through the solution of a mixed-integer linear program. Our heuristics use the same algorithmic scheme, but they differ in the choice of the point to be rounded (which is feasible for nonlinear constraints but possibly fractional) and in the linear constraints. We propose a feasibility heuristic, which aims to find an initial feasible solution, and an improvement heuristic, whose purpose is to search for an improved solution within the neighborhood of a given point. The neighborhood is defined through local branching cuts or box constraints. Computational results show the effectiveness in practice of these simple ideas, implemented within an open-source solver for nonconvex mixed-integer nonlinear programs.
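The rounding step can be pictured as choosing, among the componentwise roundings of the fractional point, a feasible one closest to it. The paper accomplishes this by solving a MILP; the toy brute-force sketch below (hypothetical function name, linear constraints only) illustrates the idea:

```python
import itertools
import math

def round_to_feasible(x_frac, A, b):
    """Toy stand-in for the rounding MILP described in the abstract: among
    all componentwise floor/ceil roundings of x_frac, return one that
    satisfies A x <= b and is closest to x_frac in the 1-norm.
    Exponential enumeration for illustration; the paper solves a MILP."""
    best, best_dist = None, float("inf")
    choices = [(math.floor(v), math.ceil(v)) for v in x_frac]
    for x in itertools.product(*choices):
        # Keep only roundings that satisfy every linear constraint.
        if all(sum(a * xi for a, xi in zip(row, x)) <= bi
               for row, bi in zip(A, b)):
            dist = sum(abs(xi - v) for xi, v in zip(x, x_frac))
            if dist < best_dist:
                best, best_dist = list(x), dist
    return best

# Round (1.4, 0.6) subject to x0 + x1 <= 2.
print(round_to_feasible([1.4, 0.6], [[1, 1]], [2]))  # → [1, 1]
```
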
Core Routing on Dynamic Time-Dependent Road Networks
Route planning in large-scale time-dependent road networks is an important practical application of the shortest paths problem that greatly benefits from speedup techniques. In this paper we extend a two-level hierarchical approach for point-to-point shortest paths computations to the time-dependent case. This method, also known as core routing in the literature for static graphs, consists of selecting a small subnetwork where most of the computations can be carried out, thus reducing the search space. We combine this approach with bidirectional goal-directed search in order to obtain an algorithm capable of finding shortest paths in a matter of milliseconds on continental-sized networks. Moreover, we tackle the dynamic scenario where the piecewise linear functions that we use to model time-dependent arc costs are not fixed, but can have their coefficients updated, requiring only a small computational effort.
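The piecewise linear arc cost functions mentioned in the abstract can be evaluated by locating the surrounding breakpoints and interpolating; updating a coefficient is just replacing one breakpoint, which is why the dynamic scenario is cheap to handle. A minimal sketch, assuming a hypothetical representation as a sorted list of (time, travel-time) pairs:

```python
import bisect

def eval_piecewise_linear(breakpoints, departure):
    """Evaluate a time-dependent arc cost given as sorted
    (time, travel_time) breakpoints: linear interpolation between
    breakpoints, nearest value outside the covered range."""
    times = [t for t, _ in breakpoints]
    if departure <= times[0]:
        return breakpoints[0][1]
    if departure >= times[-1]:
        return breakpoints[-1][1]
    # Index of the first breakpoint strictly after the departure time.
    i = bisect.bisect_right(times, departure)
    (t0, c0), (t1, c1) = breakpoints[i - 1], breakpoints[i]
    frac = (departure - t0) / (t1 - t0)
    return c0 + frac * (c1 - c0)

# Rush hour makes the arc slower between t=8 and t=9.
profile = [(0.0, 5.0), (8.0, 5.0), (9.0, 12.0), (24.0, 5.0)]
print(eval_piecewise_linear(profile, 8.5))  # → 8.5
```
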